Serverless Architecture Conference Blog

With AWS Lambda to Multicloud

Function as a Service with AWS Lambda and Knative

Mar 4, 2021

Many companies are looking to grow their IT infrastructure with the help of the cloud, or even to move it there entirely. Larger enterprises often call for multicloud. In terms of serverless, there are a few ways to achieve multicloud operation: a function can be made available using AWS Lambda, and the whole setup can then be made cloud-independent with Knative.

Multicloud – what is that exactly? Multicloud is the use of multiple cloud providers or platforms, with the particularity that it feels like a single cloud to the user. Most companies aim for this evolutionary stage of cloud computing in order to achieve independence from individual cloud providers.

Using multiple cloud providers increases resilience and availability and, of course, enables the use of technologies that individual cloud providers do not offer. For example, it becomes relatively difficult to deploy an Alexa Skill if you have committed to Microsoft Azure as your provider. A multicloud solution also gives us the opportunity to host applications with high demands on computing power, storage, and network performance with a cloud provider that meets those requirements. Less critical applications, in turn, can be hosted by a lower-cost provider to reduce IT costs.

Of course, multicloud does not only bring advantages. Using multiple cloud providers makes the design of the infrastructure much more complex and difficult to manage. The number of potential error sources increases, and billing administration across the individual cloud providers becomes more involved.

These advantages and disadvantages should be weighed carefully before making a decision. If you conclude that dependence on a single cloud provider is nothing to fear, you should instead invest the effort in making full use of that provider's cloud services.

What is Function as a Service?

In 2014, the Function as a Service (FaaS) concept appeared on the market for the first time, introduced by hook.io. In the following years, all the big players in IT jumped on the bandwagon with, for example, AWS Lambda, Google Cloud Functions, IBM OpenWhisk, and Microsoft Azure Functions. The characteristics of such a function are:

  • Server, network, operating system, storage, etc. are abstracted from the developer.
  • Billing is usage-based and fine-grained, metered by invocations and execution time.
  • FaaS is stateless, i.e. a database or file system is required to hold data or state.
  • It is very scalable.

But what are the advantages of all this? Probably the biggest advantage is that the developer no longer has to worry about the infrastructure, but only has to address individual functions. The services are very scalable and enable precise, usage-based billing. This makes it possible to achieve maximum transparency in product costs. The logic of the application can be divided into individual functions, making it much more flexible when implementing additional requirements. The functions can be used in different scenarios. Frequently found are:

  • web requests,
  • scheduled jobs and tasks,
  • events,
  • and manually started tasks.

FaaS with AWS Lambda

As a first step, let’s create a new Maven project for ourselves. In order to use the AWS Lambda-specific functionalities, we need to add the dependency seen in Listing 1 to our project.

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-core</artifactId>
  <version>1.2.0</version>
</dependency>

The next step is to implement a handler that receives the request and returns a response to the caller. The core dependency provides two interfaces for this: RequestHandler and RequestStreamHandler. We will use the RequestHandler in our sample project and declare both the input and output types as strings (Listing 2).

public class LambdaMethodHandler implements RequestHandler<String, String> {

  @Override
  public String handleRequest(String input, Context context) {
    context.getLogger().log("Input: " + input);
    return "Hello World " + input;
  }
}

If you then execute a Maven build, a .jar file is generated that can be deployed at a later time. It is already apparent here that the Lambda dependency creates a fixed wiring to AWS. As mentioned at the beginning, this is not necessarily bad, but the decision should be made consciously. If you then want to use an AWS-specific database from within the function, this coupling grows even stronger.
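Note that the Lambda runtime needs the function's dependencies on the classpath, which is typically solved by building a single fat jar. A minimal sketch using the maven-shade-plugin – the plugin version and configuration here are assumptions, not taken from the sample project:

```xml
<!-- Sketch: bundle the function and its dependencies into one deployable jar.
     The version is an assumption; adjust it to your build. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```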

Deployment of the function: There are several ways to deploy a Lambda function, either manually via the AWS console or automatically via a CI server. For this sample project, the automated way using Travis CI was chosen.

Travis is a cloud-based CI service that supports different languages and target platforms. Its big advantage is that it can be connected to your own GitHub account within seconds. What do we need for this? Travis is configured via a file called .travis.yml. In our case, we need Maven for the build, as well as the connection to our Lambda function hosted on AWS for the deployment.

As you can see from the configuration in Listing 3, you need the following configuration parameters for a successful deployment.

language: java
jdk:
- openjdk8
script: mvn clean install
deploy:
  provider: lambda
  function_name: SimpleFunction
  region: eu-central-1
  role: arn:aws:iam::001843237652:role/lambda_basic_execution
  runtime: java8
  handler_name: de.developerpat.handler.LambdaMethodHandler::handleRequest
  access_key_id:
    secure: {Your AccessKey}
  secret_access_key:
    secure: {Your SecretAccessKey}

  • Provider: The target provider to which the Travis deploy step deploys.
  • Runtime: The runtime needed to execute the function.
  • Handler name: Here we specify our request handler, with package, class, and method names.
  • Amazon credentials: Last but not least, we need to populate the credentials so that the build can deploy the function. This can be done with the following commands:
    • AccessKey: travis encrypt "Your AccessKey" --add deploy.access_key_id
    • SecretAccessKey: travis encrypt "Your SecretAccessKey" --add deploy.secret_access_key
    • If these credentials are not passed through the configuration, Travis looks for the environment variables AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY.

Interim conclusion: The effort required to provide a Lambda function is relatively low. The interfaces to the Lambda implementation are clear and easy to understand. Since AWS provides the Lambda runtime as a managed service, we do not have to worry about installation and configuration, and we get some operational services (e.g. logging and monitoring) out of the box. Of course, this is a very strong commitment to AWS. If, for example, you want to switch to Microsoft Azure or Google Cloud for some reason, you have to migrate the function and adapt the code accordingly.

FaaS with Knative

The open source project Knative was launched in 2018. The founding fathers were Google and Pivotal, but by now all the big names in IT, such as IBM and Red Hat, are also involved. The Knative framework is based on Kubernetes and Istio, which provide the application environment (based on containers) and advanced network routing. Knative extends Kubernetes with a set of middleware components that are essential for building modern, container-based applications. Because Knative is built on top of Kubernetes, applications can be hosted locally, in the cloud, or in a third-party data center.

Pre-steps: Whether AWS, Microsoft, IBM, or Google – by now every major cloud provider offers a managed Kubernetes. For testing purposes, you can also simply use a local Minikube or Minishift. For my use case, I use the Kubernetes cluster that ships with Docker for Windows, which can easily be activated via the Docker UI (Fig. 1).

Fig. 1: Enabling Kubernetes via the Docker UI.
However, the default cluster alone unfortunately doesn't get us very far yet. So how do we install Knative? Before we can install Knative, we first need Istio. There are two ways to install Istio and Knative: an automated one and a manual one.

In the Knative documentation, there are individual installation steps for different cloud providers. It should be noted that the part specific to the cloud provider is limited to the provisioning of a Kubernetes cluster. After installing Istio, the procedure is the same for all cloud providers. The manual steps required for this can be found in the Knative documentation on GitHub.
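The manual route essentially boils down to applying the Istio and Knative release manifests with kubectl. A rough sketch – the manifest file names here are placeholders; take the exact URLs from the Knative install guide:

```
# Placeholder manifest names — use the exact URLs from the Knative install guide
kubectl apply --filename istio.yaml                      # install Istio
kubectl label namespace default istio-injection=enabled  # enable sidecar injection
kubectl apply --filename release.yaml                    # install the Knative components
kubectl get pods --namespace knative-serving             # wait until everything is Running
```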

For the automated way, Pivotal has released a framework called riff; we will come across it again later during development. riff is intended to simplify the development of Knative applications and supports all core components. At the time of writing, it is available in version 0.2.0 and requires a kubectl configured for the correct cluster. Once you have this, you can install Istio and Knative via the CLI with the following command:

>>riff system install

Caution! If you install Knative locally, you must add the --node-port parameter. After running this command, the message riff system install completed successfully should appear within a few minutes.

Now we have set up our application environment with Kubernetes, Istio, and Knative – thanks to riff, relatively quickly and easily.

Developing a function: There are now many supported programming languages for developing a function that is then hosted on Knative. A few sample projects can be found on Knative's GitHub presence. I chose the Spring Cloud Function project for my use case. It specifically supports delivering business value as functions, while still bringing all the benefits of the Spring universe (autoconfiguration, dependency injection, metrics, etc.).

The first step is to add the dependency described in Listing 4 to your pom.xml.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-function-web</artifactId>
</dependency>

Once we have done that, we can write our small function. To make the Lambda function and the Spring function comparable, we will implement the same logic. Since our logic is very minimal, I will implement it in the SpringBootApplication class itself. Of course, you can also use dependency injection as usual with Spring. To implement the function, we use the java.util.function.Function interface (Listing 5); an instance of it is returned by our hello method. It is important that the method is annotated with @Bean, otherwise the endpoint will not be published.

package de.developerpat.springKnative;
 
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
 
@SpringBootApplication
public class SpringKnativeApplication {
 
  @Bean
  public Function<String, String> hello() {
    // Mirror the Lambda handler's logic and greet the caller
    return value -> "Hello " + value;
  }
 
  public static void main(String[] args) {
    SpringApplication.run(SpringKnativeApplication.class, args);
  }
} 
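Because java.util.function.Function is plain Java, such logic can be exercised without starting a Spring context. A small sketch – the class and constant names here are my own, not part of the sample project:

```java
import java.util.function.Function;

public class HelloFunctionCheck {

    // A standalone greeting function in the same style as the hello() bean
    static final Function<String, String> HELLO = value -> "Hello " + value;

    public static void main(String[] args) {
        // Invoke the function directly, no framework involved
        System.out.println(HELLO.apply("Patrick")); // prints "Hello Patrick"

        // Functions compose like any java.util.function.Function
        Function<String, String> shout = HELLO.andThen(String::toUpperCase);
        System.out.println(shout.apply("Patrick")); // prints "HELLO PATRICK"
    }
}
```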

Knative deployment: For the deployment of our function to succeed, we must first initialize our namespace with riff. This stores the username and secret of our Docker registry so that the image can be pushed when the function is created. We don't have to build the image ourselves – riff conveniently does that for us, using Cloud Foundry buildpacks. In my case, I use Docker Hub as the Docker registry. The initialization can be done with the following command:

>>riff namespace init default --dockerhub $DOCKER_ID

If you want to initialize a namespace other than default, you simply change the name. Now we want to deploy our function. We have, of course, checked everything into our GitHub repository, and now we want to build this state and then deploy it. After riff has built our Maven project, it creates a Docker image and pushes it to our Docker Hub repository. The following command does all that for us:

 
>>riff function create springknative --git-repo https://github.com/developerpat/spring-cloud-function.git --image developerpat/springknative:v1 --verbose

If we want to do the whole thing from a local path instead, we replace --git-repo with --local-path and the corresponding path. If you now follow the output of the command, you can see that the project is analyzed, built with the correct buildpack, and at the end the finished Docker image is pushed and deployed.

Calling the function: Now we want to test whether a call against our function works – as is also relatively easy to do on AWS via the console. Thanks to the riff CLI, we can do this quite easily as follows:

>>riff service invoke springknative --text -- -w '\n' -d Patrick
curl http://localhost:32380/ -H 'Host: springknative.default.example.com' -H 'Content-Type: text/plain' -w '\n' -d Patrick
Hello Patrick

With the invoke command, you can generate and execute a curl call as desired. As you can see, our function runs perfectly.

Can Knative keep up with an integrated Amazon Lambda?

Due to the fact that Knative is based on Kubernetes and Istio, some functionalities are already available natively:

Kubernetes:

  • Scaling
  • Redundancies
  • Rolling out and back
  • Health Checks
  • Service Discovery
  • Config and Secrets
  • Resilience

Istio:

  • Logging
  • Tracing
  • Metrics
  • Failover
  • Circuit Breaker
  • Traffic Flow
  • Fault Injection

This range of functionality already comes very close to that of a Function-as-a-Service solution like AWS Lambda. However, there is one drawback: there are no UIs for using this functionality. If you want monitoring information visualized in a dashboard, you have to set up a solution for it yourself (e.g. Prometheus). Some tutorials and help are available in the Knative documentation. With AWS Lambda, you get these UIs out of the box and don't have to worry about anything.

Multicloud capability

All major cloud providers now offer a managed Kubernetes. Since Knative uses Kubernetes as its base environment, it is completely independent of the Kubernetes provider. It is therefore no problem to migrate your applications from one environment to another in a very short time. The biggest effort is changing the configuration of the target environment during deployment. Based on these facts, simple migration and exit strategies can be developed.
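In practice, such a migration can boil down to pointing kubectl at the other cluster and repeating the install and deploy steps. A sketch with a hypothetical context name:

```
# Hypothetical context name — use the one your managed cluster registers
kubectl config use-context my-new-cluster

# Then repeat the riff steps from above against the new environment
riff system install
riff namespace init default --dockerhub $DOCKER_ID
riff function create springknative --git-repo https://github.com/developerpat/spring-cloud-function.git --image developerpat/springknative:v1
```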

Conclusion

Knative does not fulfill all points of the FaaS manifesto. So is it FaaS at all? From my personal point of view, yes. For me, the most important point of the FaaS manifesto is that no machines, servers, VMs, or containers should be visible in the programming model. Knative, in conjunction with the riff CLI, fulfills this point.
